perm filename HILTS.LE1[LET,JMC] blob
sn#456666 filedate 1979-07-07 generic text, type C, neo UTF8
.require "let.pub" source
.COUNT ITEM
.AT "#" ⊂NEXT ITEM;(ITEM!);⊃;
∂AIL Mr. Phil Hilts↓5761 Harwich Ct.↓Alexandria, Virginia 22311∞
.<<703 751-8990>>
Dear Phil:
Sorry about not getting around to your manuscript sooner, but
I wouldn't otherwise live up to your image of my eccentricity.
I don't see myself as so eccentric as you make me, but perhaps
others do. Anyway, I don't really object, but it has the disadvantage
that it may make others accept more inattentiveness from me than
they need to. Oh, well. Here are some comments.
#. When you cite my opinion on page 4 that robots won't be
just as smart as people, you combine one opinion that even a
skeptic about AI should accept (that precise equality is unlikely) with
another (that at least equality will someday be achieved). If you
started it with %2"When and if equality is achieved, they will soon be
smarter ..."%1, that would be avoided. A picky point.
#. Marvin Minsky was a cofounder of the M.I.T. AI Lab.
#. You seem to accept Archytas's and Albertus's alleged
accomplishments. It seems to me that the law of conservation of
energy would be
violated if something powered by weights could fly, and Albertus's
android is not quite feasible even today. In fact, Albertus's android
is just what Tony Reichelt seems to be falsely claiming to have built.
Someone, I forget who, has taken the trouble to separate fact
from fancy in the stories of pre-modern automata, but if you
haven't time to find the book, you should indicate some skepticism.
I think maybe that Heinz Zemanek of Vienna (he used to be at the
IBM Research Center there) wrote a book about it. Anyway he is
an expert on Renaissance and 18th century automata.
#. In my opinion, Leonardo understood kinematics very well,
but he didn't even have an intuitive feeling for the dynamics
later discovered by Galileo, Newton, etc. For this reason, many of Leonardo's
proposed machines were wrongly proportioned and couldn't work. This
is particularly true of his flying machines. Compare the Gossamer
Condor's 70 foot wingspan with Leonardo's drawings showing comparatively
tiny wings.
We might be in the same position with respect to some of the components
of intelligence, but it would be too neat if our errors turned out
to be precisely analogous to his.
#. Turing is not the first person to get caught in a semi-infinite
cycle of tool making.
#. Turing's 1936 paper defined Turing machines (the name came
later) in general. He described a universal machine that could
simulate any other Turing machine in order to solve problems in
the mathematical theory of computability. His paper didn't
suggest actually building the machine. During the war, he
took some of the initiative in developing the machines
that were used in breaking the German ciphers, and he did
initiate the ACE and DEUCE computers right after the war.
#. Instead of %2"McCarthy thought the idea was premature"%1,
it would be more correct to say, %2"McCarthy couldn't bring the idea
into a form which he thought would actually learn anything significant,
and he still hasn't."%1 You should also say "mental experiments", since
I didn't physically build anything - in contrast to Marvin Minsky who
built an experimental learning machine called the Snark.
#. My study of students who entered M.I.T. at an early age
was done when I was an assistant professor there - about 1960.
I had been readmitted to Caltech when I was drafted.
#. I don't recall proposing a low altitude stall to Fredkin,
and I never did get to jumping from 10,000 feet.
#. Goguen was referring to Peter Landin - not Landon.
#. Lisp wasn't the second programming language developed.
It is the second oldest still in use after Fortran.
#. (p. 24). It doesn't seem that the essence of language is
abbreviation. Abbreviation of what? A more prolix language?
#. I don't think your charge of "deliberate obfuscation by
programmers" will stand up. While mathematical ability is not the
same as programming ability, programming any extensive task is
genuinely difficult and tedious. Good programming languages help
but don't make the task easy. Boden's analogy with knitting is good
up to a point. The extension of the simple knitting language to
a language for programming a knitting machine would
make an even closer parallel.
There is no gross error here, and your statement of the matter
is good enough to be helpful.
#. In my opinion, the reason for the over-optimism around 1960
was a mistaken belief that all that was required for intelligence was
a good search program. If this had been right, Simon's predictions
would have been realized. My 1959 paper, %2Programs with Common
Sense%1, already suggested that more was required. However,
I too was over-optimistic about how long that would take. Mainly
I thought that more smart people would be attracted to the problem,
but actually epistemological problems of AI are only beginning to
attract attention. Come to think of it, Simon has also said that
he expected that work on chess would be more serious than it
actually turned out to be.
#. Heuristics and epistemology are not rivals but
complementary, and both are needed for successful AI.
I work on epistemology, because it suits my talents,
and also because hardly anyone else is doing it.
#. A human going to Duluth doesn't ask himself what "get to"
means unless he is doing philosophy. The meaning of "get to" is
built in for all but the tiniest babies, i.e. the meaning is
embodied in program rather than data. Otherwise, there would be
an infinite regress; the program would have to ask itself what "mean"
means and so on. The regress stops at knowledge that is embodied
in program.
This point seems to be not well known to philosophers.
#. So far as I know, "computist" is not an accepted English word.
#. I left M.I.T. in 1962.
#. I left M.I.T. for three reasons even though M.I.T. offered
me slightly more money if I would stay and hinted that the full professorship
would probably follow soon. (1) I prefer California. (2)
Getting the full professorship leaves a certain kind of ambition
behind. (3) I was frustrated by the fact that the M.I.T. administration
was stalling on the recommendation of a committee I chaired that
M.I.T. move to time-sharing for its next generation computer. What's
worse, this stalling took the form of asking us to do the report over -
providing more substantiation of the need for time-sharing - like
redoing a market survey among ditch diggers about the need for steam shovels.
#. The volcano was only 14,600 feet high. I had mainly quit climbing
before I met Vera, although we actually met in connection with a
small climb. After we met, my climbing somewhat revived, and I expected
that after the Annapurna expedition, we would climb a bit more together,
but I did not expect to make the great effort that would be required before
I could climb at anything like her level.
#. It occurs to me that the reader might think we are claiming
that the secretarial uses of computers are unique to our lab. Perhaps
they were in the late 60s and early 70s.
#. It might be better if you left out the part about the suicide
note. It is true, but there is another user of our computer who has
some suicidal tendencies and leaves notes suggesting that he is
about ready to end it all. Reading about it in print might set him
off again.
I don't say it's likely, but it just might.
#. Vera had sufficient experience for the Annapurna climb, I
think, and she got more on the climb. Climbers tend to be sensitive
to criticism when an accident occurs, and the other members of the
expedition would not agree that Vera and Alison hadn't enough ice
experience to justify their summit attempt.
#. The story about the Levy bet is incorrect. I don't believe
I ever played Levy and would not have expected to give an International
Master a hard game. Levy was a graduate student in computer science
at Glasgow University as well as a chess master, and part of my plan
was to hire Levy to work on chess programming. He would have made
much more money from that than the amount of the bet. As it turned
out, Levy quit computer science soon after the bet and became a
professional chess writer.
My recollection is that Levy only relaxed after he won the
first game.
#. I don't want to be quoted as saying that there are practically
no new ideas in the program that played Levy. Let it rather be said
that none of the ideas that seem to me to be required to play chess
well without searching hundreds of thousands of positions are present.
My student, David Wilkins, has just finished a PhD thesis on pattern
recognition in chess that can find deep combinations looking at only
a few hundred positions. If Wilkins's approach can be extended to the
full game, we may get a program that can play master level chess by
intelligence rather than brute force. Even to make brute force work
well, Slate and Atkin needed new ideas, and I don't want to denigrate
their work.
#. No computers at Stanford were ever damaged. However,
someone did put a gasoline bomb in an unused room in our building in 1970.
Fortunately, the sprinkler (more properly flooder) system put out the
fire before it did more than scorch the floor, and the attempt was
not repeated. Fortunately, the 1970s bombings were not the work
of serious terrorists but more the work of people with the psychology
of ordinary vandals - excited and encouraged by the rhetoric
of the time. Well, maybe the German and Japanese terrorists are
just ordinary vandals excited by rhetoric.
Contrary to this theory, I doubt that computers have been especially
targeted. I think there have been many more bombings of power lines
and microwave communication towers.
#. It was Alison's body that was seen. All they could recognize
at that distance was the color of the down parka.
I like your manuscript, and I think it makes many useful points.
.reg